Arnoldi iteration : Wikipedia English edition
Arnoldi iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of iterative methods. Arnoldi finds the eigenvalues of general (possibly non-Hermitian) matrices; an analogous method for Hermitian matrices is the Lanczos iteration. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.
The term ''iterative method'', used to describe Arnoldi, can perhaps be somewhat confusing. Note that all general eigenvalue algorithms must be iterative. This is not what is referred to when we say Arnoldi is an iterative method. Rather, Arnoldi belongs to a class of linear algebra algorithms (based on the idea of Krylov subspaces) that give a partial result after a relatively small number of iterations. This is in contrast to so-called ''direct methods'', which must complete to give any useful results.
The Arnoldi iteration is a typical large sparse matrix algorithm: it does not access the elements of the matrix directly, but instead applies the matrix to vectors and draws its conclusions from their images. This is the motivation for building the Krylov subspace.
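The matrix-free structure described above can be sketched in Python with NumPy. This is an illustrative implementation of the standard Arnoldi recurrence, not code from the article; the function name `arnoldi` and the breakdown tolerance are choices made here. Note that the matrix A is touched only through matrix-vector products:

```python
import numpy as np

def arnoldi(A, b, n):
    """Run n steps of the Arnoldi iteration starting from vector b.

    Returns Q (m x (n+1)) with orthonormal columns spanning the Krylov
    subspace, and H ((n+1) x n) upper Hessenberg, satisfying A Q_n = Q_{n+1} H.
    """
    m = len(b)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]               # the only access to A: one matrix-vector product
        for j in range(k + 1):        # modified Gram-Schmidt against previous columns
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < 1e-12:       # breakdown: the Krylov subspace is invariant
            return Q[:, :k + 1], H[:k + 1, :k]
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H
```

The eigenvalues of the small Hessenberg matrix H (the Ritz values) then serve as approximations to the extremal eigenvalues of A.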
==Krylov subspaces and the power iteration==

An intuitive method for finding an eigenvalue (specifically the largest eigenvalue) of a given ''m'' × ''m'' matrix A is the power iteration. Starting with an initial random vector ''b'', this method calculates ''Ab'', ''A''^2''b'', ''A''^3''b'', … iteratively, storing and normalizing the result into ''b'' on every turn. This sequence converges to the eigenvector corresponding to the largest eigenvalue, \lambda_{\max}. However, much potentially useful computation is wasted by using only the final result, A^{n-1}b. This suggests that instead, we form the so-called ''Krylov matrix'':
:K_n = \begin{bmatrix} b & Ab & A^2 b & \cdots & A^{n-1} b \end{bmatrix}.
The columns of this matrix are not orthogonal, but in principle, we can extract an orthogonal basis, via a method such as Gram–Schmidt orthogonalization. The resulting vectors are a basis of the ''Krylov subspace'', \mathcal{K}_n. We may expect the vectors of this basis to give good approximations of the eigenvectors corresponding to the ''n'' largest eigenvalues, for the same reason that A^{n-1}b approximates the dominant eigenvector.
